fix: extract reasoning_content from summary for DeepSeek compatibility #2245
Conversation
When using LiteLLM with DeepSeek in thinking mode, multi-turn conversations with tool calls fail because the API requires the `reasoning_content` field in assistant messages. This change:

- Tracks `pending_reasoning_content` across reasoning and assistant messages
- Extracts reasoning text from the `summary` field of reasoning messages
- Includes `reasoning_content` in assistant messages that have tool calls

Fixes openai#2155

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
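A minimal sketch of the described flow (function name and message shapes are assumptions for illustration, not the SDK's actual code): reasoning text is carried forward from a reasoning item into the next assistant message that has tool calls.

```python
# Hypothetical sketch of the conversion step described above: track
# pending_reasoning_content across items and attach it to assistant
# messages that carry tool calls, as DeepSeek's thinking mode requires.
def convert_items(items: list[dict]) -> list[dict]:
    messages = []
    pending_reasoning_content = None
    for item in items:
        if item.get("type") == "reasoning":
            # Collect the text of summary_text parts from the summary field.
            parts = [
                p.get("text", "")
                for p in item.get("summary", [])
                if isinstance(p, dict) and p.get("type") == "summary_text"
            ]
            pending_reasoning_content = "".join(parts) or None
        elif item.get("role") == "assistant":
            message = dict(item)
            if message.get("tool_calls") and pending_reasoning_content:
                # Without this field, DeepSeek rejects multi-turn
                # tool-call history in thinking mode.
                message["reasoning_content"] = pending_reasoning_content
            pending_reasoning_content = None
            messages.append(message)
        else:
            messages.append(dict(item))
    return messages
```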
💡 Codex Review
Here are some automated review suggestions for this pull request.
```python
summary_items = reasoning_item.get("summary", [])
for summary_item in summary_items:
    if isinstance(summary_item, dict) and summary_item.get("type") == "summary_text":
        pending_reasoning_content = summary_item.get("text", "")
        break
```
Concatenate all summary_text parts before setting reasoning_content
The reasoning summary is a list of parts, but the loop breaks on the first summary_text. If a reasoning item contains multiple summary_text entries (e.g., summaries emitted in multiple parts), only the first chunk is copied into reasoning_content, so the assistant message history will carry a truncated reasoning payload. This can still break DeepSeek continuity because the required reasoning content is incomplete. Consider concatenating all summary_text parts in order instead of stopping at the first one.
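A minimal sketch of the reviewer's suggestion (the helper name is an assumption): join every `summary_text` part in order instead of stopping at the first one, so no reasoning text is dropped.

```python
# Concatenate all summary_text parts of a reasoning item's summary field,
# rather than breaking on the first match.
def extract_reasoning_content(reasoning_item: dict) -> str:
    texts = [
        part.get("text", "")
        for part in reasoning_item.get("summary", [])
        if isinstance(part, dict) and part.get("type") == "summary_text"
    ]
    return "".join(texts)
```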
There is another PR #2156 that is also working on this issue. That PR intentionally limits the changes to DeepSeek, whereas this PR does not appear to check for that condition.
This PR is stale because it has been open for 10 days with no activity.
Summary

- Fixes #2155, where LiteLLM with DeepSeek in thinking mode fails with multi-turn conversations that include tool calls
- Extracts `reasoning_content` from the `summary` field of reasoning messages
- Includes `reasoning_content` in assistant messages that have tool calls

Problem
When using the OpenAI Agents SDK with LiteLLM configured for DeepSeek's thinking mode (`deepseek/deepseek-reasoner`), multi-turn conversations with tool calls fail because:

- DeepSeek returns the reasoning text in the reasoning item's `summary` field
- The `reasoning_content` field, which the API requires, is missing from the assistant messages sent back

Solution
The fix tracks `pending_reasoning_content` across message conversion:

- Reasoning text is extracted from the `summary` field (which contains `summary_text` items)
- The text is stored in `pending_reasoning_content`
- Assistant messages with tool calls then include it as `reasoning_content`

Test plan
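The PR description does not spell out a test plan; as an illustration only, the shape of the assistant message the fix should produce might look like the following (field values and tool-call payload are assumptions, not taken from the PR):

```python
# Illustrative only: an assistant message in multi-turn tool-call history
# as DeepSeek's thinking mode expects it, with reasoning_content present
# alongside the tool calls. Payload details here are hypothetical.
expected_assistant_message = {
    "role": "assistant",
    "content": None,
    "reasoning_content": "The user asked for the weather; call the tool.",
    "tool_calls": [
        {
            "id": "call_1",
            "type": "function",
            "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
        }
    ],
}

# The key property under test: reasoning_content must accompany tool calls.
assert "reasoning_content" in expected_assistant_message
```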
🤖 Generated with Claude Code